

Search for: All records

Creators/Authors contains: "Huebner, Sarah"

Note: Clicking a Digital Object Identifier (DOI) link will take you to an external site maintained by the publisher. Some full-text articles may not yet be available without charge during the embargo period (an administrative interval).

Some links on this page may take you to non-federal websites. Their policies may differ from this site.

  1. Fortson, Lucy; Crowston, Kevin; Kloetzer, Laure; Ponti, Marisa (Ed.)
    Using public support to extract information from vast datasets has become a popular method for accurately labeling wildlife data in camera trap (CT) images. However, the increasing demand for volunteer effort lengthens the time interval between data collection and our ability to draw ecological inferences or perform data-driven conservation actions. Artificial intelligence (AI) approaches are currently highly effective for species detection (i.e., determining whether an image contains animals) and for labeling common species; however, they perform poorly on species rarely captured in images and on species that are highly visually similar to one another. To capitalize on the best of human and AI classification methods, we developed an integrated CT data pipeline in which AI provides an initial pass on labeling images but is supervised and validated by humans (i.e., a “human-in-the-loop” (HITL) approach). To assess classification accuracy gains, we compared the precision of species labels produced by the AI and HITL protocols to a “gold standard” (GS) dataset annotated by wildlife experts. The accuracy of the AI method was species-dependent and positively correlated with the number of training images. The combined efforts of HITL led to error rates of less than 10% for 73% of the dataset and lowered the error rates for an additional 23%. For two visually similar species, human input resulted in higher error rates than AI alone. While integrating humans in the loop increases classification times relative to AI alone, the gains in accuracy suggest that this method is highly valuable for high-volume CT surveys.
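The HITL pipeline described above can be sketched as a confidence-threshold triage: AI labels every image first, low-confidence predictions are routed to volunteers, and accuracy is measured against the expert gold standard. The field names, the 0.9 threshold, and both functions below are illustrative assumptions, not the paper's actual pipeline code:

```python
# Minimal sketch of a human-in-the-loop (HITL) triage step for camera-trap
# labels. All names and the confidence threshold are hypothetical.

def triage(ai_predictions, confidence_threshold=0.9):
    """Split AI predictions into auto-accepted labels and a human-review queue.

    ai_predictions: list of dicts like
        {"image_id": str, "species": str, "confidence": float}
    """
    accepted, needs_review = [], []
    for pred in ai_predictions:
        if pred["confidence"] >= confidence_threshold:
            accepted.append(pred)       # trust the AI label as-is
        else:
            needs_review.append(pred)   # send to volunteer validation
    return accepted, needs_review


def error_rate(labels, gold_standard):
    """Fraction of labels that disagree with the expert gold-standard dataset."""
    if not labels:
        return 0.0
    wrong = sum(1 for p in labels
                if p["species"] != gold_standard[p["image_id"]])
    return wrong / len(labels)
```

In this sketch, lowering the threshold sends more images to volunteers (higher accuracy, longer classification time), mirroring the accuracy/time trade-off the abstract reports.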
  2. Competition, facilitation, and predation offer alternative explanations for successional patterns of migratory herbivores. However, these interactions are difficult to measure, leaving uncertainty about the mechanisms underlying body-size-dependent grazing—and even whether succession occurs at all. We used data from an 8-year camera-trap survey, GPS-collared herbivores, and fecal DNA metabarcoding to analyze the timing, arrival order, and interactions among migratory grazers in Serengeti National Park. Temporal grazing succession is characterized by a “push-pull” dynamic: Competitive grazing nudges zebra ahead of co-migrating wildebeest, whereas grass consumption by these large-bodied migrants attracts trailing, small-bodied gazelle that benefit from facilitation. “Natural experiments” involving intense wildfires and rainfall respectively disrupted and strengthened these effects. Our results highlight a balance between facilitative and competitive forces in co-regulating large-scale ungulate migrations. 
  3.
    Camera traps (remote cameras that capture images of passing wildlife) have become a ubiquitous tool in ecology and conservation. Systematic camera trap surveys generate ‘Big Data’ across broad spatial and temporal scales, providing valuable information on environmental and anthropogenic factors affecting vulnerable wildlife populations. However, the sheer number of images amassed can quickly outpace researchers’ ability to manually extract data from these images (e.g., species identities, counts, and behaviors) in timeframes useful for making scientifically guided conservation and management decisions. Here, we present ‘Snapshot Safari’ as a case study for merging citizen science and machine learning to rapidly generate highly accurate ecological Big Data from camera trap surveys. Snapshot Safari is a collaborative cross-continental research and conservation effort with 1500+ cameras deployed at over 40 protected areas in eastern and southern Africa, generating millions of images per year. As one of the first and largest-scale camera trapping initiatives, Snapshot Safari spearheaded innovative developments in citizen science and machine learning. We highlight the advances made and discuss the issues that arose in using each of these methods to annotate camera trap data. We end by describing how we combined human and machine classification methods (‘Crowd AI’) to create an efficient integrated data pipeline. Ultimately, by using a feedback loop in which humans validate machine learning predictions and machine learning algorithms are iteratively retrained on new human classifications, we can capitalize on the strengths of both methods of classification while mitigating their weaknesses. Using Crowd AI to quickly and accurately ‘unlock’ ecological Big Data for use in science and conservation is revolutionizing the way we take on critical environmental issues in the Anthropocene era.
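The ‘Crowd AI’ feedback loop described above can be sketched in a few lines: the model labels each batch of images, humans validate the predictions, and the model is iteratively retrained on the accumulated human-confirmed labels. Every function and callback here is a hypothetical stand-in, not Snapshot Safari code:

```python
# Illustrative sketch of a human/machine feedback loop: machine predictions
# are checked by volunteers, and the model is retrained on the growing pool
# of human-confirmed labels. All callables are hypothetical placeholders.

def crowd_ai_loop(model, batches, validate, retrain):
    """Run one validate-then-retrain pass per batch of camera-trap images.

    model:    callable image -> predicted species label
    batches:  iterable of lists of images
    validate: callable (batch, predictions) -> human-confirmed (image, label) pairs
    retrain:  callable (model, confirmed_labels) -> updated model
    """
    confirmed_labels = []
    for batch in batches:
        predictions = [model(img) for img in batch]    # machine pass
        confirmed = validate(batch, predictions)       # human validation
        confirmed_labels.extend(confirmed)             # grow labeled pool
        model = retrain(model, confirmed_labels)       # iterative retraining
    return model, confirmed_labels
```

The loop captures the abstract's key point: each pass strengthens the model with human-verified labels, so machine accuracy improves while human effort is concentrated on validation rather than labeling from scratch.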
  4.